An Algorithmic Proof of the Lovász Local Lemma via Resampling Oracles
The Lovász Local Lemma is a seminal result in probabilistic combinatorics. It
gives a sufficient condition on a probability space and a collection of events
for the existence of an outcome that simultaneously avoids all of those events.
Finding such an outcome by an efficient algorithm has been an active research
topic for decades. Breakthrough work of Moser and Tardos (2009) presented an
efficient algorithm for a general setting primarily characterized by a product
structure on the probability space.
In this work we present an efficient algorithm for a much more general
setting. Our main assumption is that there exist certain functions, called
resampling oracles, that can be invoked to address the undesired occurrence of
the events. We show that, in all scenarios to which the original Lovász Local
Lemma applies, there exist resampling oracles, although they are not
necessarily efficient. Nevertheless, for essentially all known applications of
the Lovász Local Lemma and its generalizations, we have designed efficient
resampling oracles. As applications of these techniques, we present new results
for packings of Latin transversals, rainbow matchings and rainbow spanning
trees.

Comment: 47 pages
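To make the product-space setting concrete, here is a minimal Python sketch of the Moser-Tardos algorithm for CNF satisfiability (an illustration of that special case, not this paper's general resampling-oracle framework; the function name and clause encoding are our own):

```python
import random

def moser_tardos_sat(n_vars, clauses, seed=0, max_steps=10_000):
    """Moser-Tardos resampling for CNF satisfiability (product-space setting).

    clauses: list of clauses; a clause is a list of nonzero ints, where
    literal v > 0 means "variable v is True" and v < 0 means "variable |v| is False".
    """
    rng = random.Random(seed)
    # Sample an initial assignment from the product distribution (uniform bits).
    assign = [rng.random() < 0.5 for _ in range(n_vars)]

    def violated_clause():
        # A "bad event" occurs when no literal of some clause is satisfied.
        for clause in clauses:
            if not any((assign[abs(l) - 1] if l > 0 else not assign[abs(l) - 1])
                       for l in clause):
                return clause
        return None

    for _ in range(max_steps):
        bad = violated_clause()
        if bad is None:
            return assign  # every bad event is avoided
        # Resampling step: redraw only the variables of the violated clause,
        # independently from their original distribution.
        for l in bad:
            assign[abs(l) - 1] = rng.random() < 0.5
    raise RuntimeError("step budget exceeded")
```

Under the Local Lemma condition (each clause shares variables with sufficiently few others), Moser and Tardos showed the expected number of resampling steps is polynomial; the step cap here is only a practical safeguard.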
Optimal Bounds on Approximation of Submodular and XOS Functions by Juntas
We investigate the approximability of several classes of real-valued
functions by functions of a small number of variables (juntas). Our main
results are tight bounds on the number of variables required to approximate a
function f : {0,1}^n → [0,1] within ℓ₂-error ε over the uniform distribution:
1. If f is submodular, then it is ε-close to a function of O((1/ε²) log(1/ε))
variables. This is an exponential improvement over previously known results. We
note that Ω(1/ε²) variables are necessary even for linear functions.
2. If f is fractionally subadditive (XOS), it is ε-close to a function of
2^{O(1/ε²)} variables. This result holds for all functions with low total
ℓ₁-influence and is a real-valued analogue of Friedgut's theorem for boolean
functions. We show that 2^{Ω(1/ε)} variables are necessary even for XOS
functions.
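To make the notion of a junta approximation concrete, here is a brute-force Python sketch (an illustration of the concept, not this paper's construction; the function names are our own): compute each coordinate's influence of a real-valued function on the hypercube exactly, and keep the k most influential coordinates as a candidate junta.

```python
from itertools import product

def l2_influences(f, n):
    """Influence of each coordinate under the uniform distribution on {0,1}^n:
    Inf_i[f] = (1/4) * E[(f(x) - f(x with bit i flipped))^2].
    Brute force over all 2^n points, so suitable for small n only."""
    inf = [0.0] * n
    for x in product([0, 1], repeat=n):
        for i in range(n):
            y = list(x)
            y[i] ^= 1  # flip coordinate i
            inf[i] += (f(x) - f(tuple(y))) ** 2
    return [v / (4 * 2 ** n) for v in inf]

def top_k_coordinates(f, n, k):
    """The k most influential coordinates: a natural candidate junta,
    since coordinates with negligible influence can be averaged out."""
    inf = l2_influences(f, n)
    return sorted(range(n), key=lambda i: (-inf[i], i))[:k]
```

For example, a function that depends only on its first two coordinates has zero influence in every other coordinate, so the candidate junta recovers exactly those two.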
As applications of these results, we provide learning algorithms over the
uniform distribution. For XOS functions, we give a PAC learning algorithm that
runs in time 2^{poly(1/ε)} · poly(n). For submodular functions we give
an algorithm in the more demanding PMAC learning model (Balcan and Harvey,
2011), which requires a multiplicative (1+γ)-factor approximation with
probability at least 1−ε over the target distribution. Our uniform
distribution algorithm runs in time 2^{Õ(1/(γε)²)} · poly(n).
This is the first algorithm in the PMAC model that, over the uniform
distribution, can achieve a constant approximation factor arbitrarily close to 1
for all submodular functions. As follows from the lower bounds in (Feldman et
al., 2013), both of these algorithms are close to optimal. We also give
applications for proper learning, testing, and agnostic learning with value
queries of these classes.

Comment: Extended abstract appears in proceedings of FOCS 2013
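As an illustration of the uniform-distribution learning step (a hedged sketch, not this paper's algorithm; `learn_junta` and its interface are our own), once a small set J of relevant coordinates is in hand, the best ℓ₂ approximation by a function of J is the conditional expectation E[f(x) | x_J], which can be estimated by averaging samples within each projection:

```python
from collections import defaultdict

def learn_junta(samples, junta):
    """Fit the junta hypothesis h(x_J) ≈ E[f(x) | x_J] from labeled samples.

    samples: list of (x, f(x)) pairs, with x a tuple of bits.
    junta:   list of coordinate indices the hypothesis may depend on.
    Returns h as a dict mapping each observed projection x_J to the
    average label seen for it."""
    sums, counts = defaultdict(float), defaultdict(int)
    for x, fx in samples:
        key = tuple(x[i] for i in junta)
        sums[key] += fx
        counts[key] += 1
    return {key: sums[key] / counts[key] for key in sums}
```

With enough uniform samples, each average concentrates around the conditional expectation, so the hypothesis converges to the best junta approximation on J; the junta-size bounds above are what make the number of distinct projections, and hence the sample complexity, manageable.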